Lecture – Sloman, Varieties of evolvable minds

Greg Detre

@2 on Monday, 22 January, 2001

Weiskrantz Room (C113), EP

 

Lecture – Sloman, Varieties of evolvable minds

Introduction

Site references

Subtitles

Overview

Architectures

Engineers as philosophers

Tiny subset of the space of possible virtual machine architectures

Evolution as a "Designer"

Design space and niche space

Different sorts of trajectories through the 2 spaces

Biological evolution

Evolution of mind

Perhaps evolution …

Even apparently similar animals may have very different information processing VM architectures

Questions

 

Introduction

Sloman started in Maths, then philosophy, in Oxford with Ryle + Nagel (graduate students: Searle and Kenny). Then went on to Sussex and AI, and now Birmingham and Computer Science.

Reactive systems (sheepdog and sheep in a sheep-pen)

Deliberative

Meta-management systems

Site references

http://www.cs.bham.ac.uk/research/cogaff

www.cs.bham.ac.uk/research/poplog/freepoplog.html

 

Subtitles

How to turn philosophers of mind into engineers

How to think about architectures for human-like and other agents

Overview

Consider:

whole architectures (not just language, vision, learning etc.)

different species (not just humans)

individual differences (infants, brain-damaged etc.)

artefacts (not biological systems)

design requirements (and how they change)

design possibilities (beyond the obvious)

developmental and evolutionary trajectories

multiple disciplines (including philosophy - switch modes of thinking often)

acknowledge conceptual confusion (we don't necessarily mean what we think we mean)

Think less like a scientist and more like an engineer – trade-offs, and design requirements/possibilities

Architectures

need not be physical architectures – virtual machine architectures

virtual machines = as real as poverty, inflation and other abstract processes that impact on our lives

just like the spellchecker in a word-processor or the pieces in a chess game – these don't exist physically in the computer if you open it up, but they have events and consequences

they are not epiphenomena, but emergent phenomena with causal powers

concepts used to describe them are not applicable at the lower level; there are laws, but they are not the same laws as at the lower level

indeed, there are also physical events that are affected by virtual machine events

emergent phenomena everywhere:

biosphere, wars, poverty, species, animal kingdom, compilers, computational virtual machines, computers, clouds, organic chemistry, chemistry, physics – all implemented in the underlying deepest levels of physics
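The spellchecker/chess-piece point can be illustrated with a minimal sketch (the class and program below are invented for illustration, not from the lecture): a tiny stack-based virtual machine whose events obey VM-level laws that are stated nowhere in the substrate that implements them.

```python
# Minimal sketch of a virtual machine (illustrative): VM-level events
# (PUSH, ADD) have VM-level causal consequences, yet exist only as
# patterns in a lower-level substrate (here, ordinary Python objects).

class TinyVM:
    def __init__(self):
        self.stack = []

    def run(self, program):
        # Each instruction is a VM-level event governed by VM-level laws
        # (e.g. ADD always pops two values and pushes their sum) --
        # laws not expressed anywhere in the physics of the host machine.
        for op, *args in program:
            if op == "PUSH":
                self.stack.append(args[0])
            elif op == "ADD":
                b, a = self.stack.pop(), self.stack.pop()
                self.stack.append(a + b)
        return self.stack[-1]

vm = TinyVM()
result = vm.run([("PUSH", 2), ("PUSH", 3), ("ADD",)])
# result is a VM-level consequence: emergent, causally effective,
# and described in concepts ("stack", "instruction") that do not
# apply at the implementation level.
```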

Engineers as philosophers

mind/brain vs virtual machine/physical machine

supervenience – philosophical property supervenience is not the notion that is helpful here

but we need to go beyond properties, to enduring events etc. = "mechanism supervenience"

Tiny subset of the space of possible virtual machine architectures

different VM architectures required for minds of different sorts

species, adult/infant, damaged/diseased etc.

need to place normal adult human mental architectures within the broader context of the space of possible minds

i.e. minds with different architectures that:

meet different sets of requirements

fit different niches

Evolution as a "Designer"

but not a deliberative designer

it becomes somewhat explicit in its designs once you evolve intelligent species

Design space and niche space

relations between designs + requirements (niches): multi-dimensional, complex relationships that can't be summarily quantified as just "fitness functions"; topological discontinuities

Different sorts of trajectories through the 2 spaces

i-trajectory

possible for an individual organism/machine

development/adaptation/learning processes: egg to chicken, acorn to oak tree

e-trajectory

sequence of designs evolving through natural/artificial evolution; multiple re-starts in slightly different locations

r-trajectory

system being repaired or built by external designers, turning non-functioning part-built systems into functional wholes

s-trajectory

possible for social systems with multiple communicating individuals (= a type of i-trajectory)

all but r-trajectories are constrained by the requirement for "viable" systems at every stage, though both e- and r-trajectories are discontinuous
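The viability constraint on trajectories can be sketched as a toy search through a one-dimensional "design space" (the fitness and viability functions here are invented for illustration): every intermediate design on the path must itself be viable, not just the endpoint.

```python
import random

# Toy e-trajectory: a random-mutation walk through a 1-D design space
# that only accepts steps landing on viable designs. The fitness and
# viability functions are illustrative stand-ins, not from the lecture.

def viable(design):
    return design >= 0           # non-viable designs are dead ends

def fitness(design):
    return -(design - 10) ** 2   # the niche "prefers" designs near 10

def e_trajectory(start, steps, seed=0):
    rng = random.Random(seed)
    design, path = start, [start]
    for _ in range(steps):
        candidate = design + rng.choice([-1, 1])
        # every stage of the trajectory must be a viable system,
        # and must not lose fitness in its niche
        if viable(candidate) and fitness(candidate) >= fitness(design):
            design = candidate
            path.append(design)
    return path

path = e_trajectory(start=3, steps=50)
assert all(viable(d) for d in path)   # viability holds at every stage
```

An r-trajectory, by contrast, could pass through non-viable intermediate designs, since an external repairer keeps the part-built system alive.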

Biological evolution

multiple interacting e-trajectories

later using i-trajectories

then s-trajectories

and now also r-trajectories

why so few "intelligent" species/individuals?

Evolution of mind

different mental concepts are applicable in different architectures

if mental concepts are architecture-based then we cannot use our concepts to understand "what it is like" to be a fly, a bat or a new-born baby

Perhaps evolution …

perhaps evolution designed babies with the ability to fool parents into treating them as humans while they build their human architecture

Even apparently similar animals may have very different information processing VM architectures

precocial species – born/hatched ready to feed, walk, swim, run etc. (e.g. chickens, deer, horses)

altricial species – helpless, need days/weeks/months to grow their software architectures (e.g. eagles, chimps, humans)

Why are precocial and altricial species so different?

so we need different sets of concepts to describe what a lion sees and what a deer sees

why do humans take so long to mature?

not because we'd kill our mothers with too-big skulls (we'd become more like elephants)

because we get a much deeper grasp of space, time etc., which we need to be human and which takes longer to develop

if evolution cannot predesign all the intricate mechanisms, it can instead use a bootstrapping architecture

obviously a continuum not a dichotomy � families of architecture

What kind of machine can have emotions?

problem: many different definitions of emotions (between and within each discipline), which concentrate on different phenomena

e.g. in psychology: on the basis of brain processes, physiological processes, environment/behaviour interaction, or what you're conscious of

Sartre: to have an emotion = "to see the world as magical"

what are the architectural requirements for human-like mental states and processes?

machines which have such architectures will be able to have human-like emotions

3 classes of emotions linked to different layers in the architecture evolved at different times: primary, secondary and tertiary emotions (+ moods and other affective states)

Can biological evolution produce an unintelligible mess (that works)?

yes, in principle

as indicated by computational evolution – the solution works, but not in a way that any human would have designed, and is very difficult to understand – not modular

but for more complicated systems, there is a requirement/drive towards modularity, otherwise little changes here and there would have ramifications everywhere

The "CogAff" Architecture Schema

= cognitive + affect

refers to a space of architectures

"triple tower" = input-central-output

"triple layer" = three layers of evolution

plus alarms and other components
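Since CogAff names a space of architectures rather than one design, the schema can be sketched as a 3×3 grid of possible components, of which a particular architecture fills only some cells (the helper below is an illustrative data structure, not from the lecture).

```python
# Sketch of the CogAff schema: three towers (columns) crossed with
# three evolutionary layers (rows). A particular architecture is one
# choice of which cells to fill; the schema is the space of choices.

TOWERS = ("perception", "central processing", "action")
LAYERS = ("reactive", "deliberative", "meta-management")

def make_architecture(filled_cells):
    """filled_cells: set of (layer, tower) pairs present in this design."""
    return {(layer, tower): (layer, tower) in filled_cells
            for layer in LAYERS for tower in TOWERS}

# A purely reactive, insect-like design fills only the reactive row:
insect = make_architecture({("reactive", t) for t in TOWERS})
assert insect[("reactive", "perception")]
assert not insect[("deliberative", "central processing")]
```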

The "triple tower" view

see Nilsson (Introduction to AI???), Albus (Minds, brains and robots???)

Levels in perceptual mechanisms

Necker cube – complete explanation in terms of geometrical percepts

duck-rabbit – uses far more subtle and abstract percepts, going beyond geometric and physical properties (compare Marr on vision), e.g. which way it is facing

that's because we've evolved to see other organisms as sentient

we have specialised, automatic systems operating to produce 3D and animate interpretations of these images

Evolution of perceptual mechanisms

a mind (or brain) is a co-evolved ecosystem

One of many layered views

triune brain – reptilian, old mammalian, new mammalian

reactive mechanisms (oldest)

deliberative reasoning – "what if" mechanisms (older)

meta-management – reflective processes (newest)

reactive systems simply need to produce behaviours. deliberative reasoning requires a representation of the behaviour, otherwise you have to try all the behaviours out in series. one behaviour might have killed you – "the hypothesis dies in your stead"
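The "hypothesis dies in your stead" point can be sketched as follows (the world model and action names are invented for illustration): a deliberative system runs candidate actions against an internal representation, so a fatal action kills only the hypothesis, not the agent.

```python
# Sketch of deliberative "what if" reasoning (illustrative world model).
# A purely reactive agent would have to try actions in the world, in
# series; a deliberative agent evaluates them in imagination first.

def simulate(state, action):
    """Internal model: predict an action's outcome without performing it."""
    if action == "jump_chasm":
        return {"alive": False}
    return {"alive": True, "progress": state.get("progress", 0) + 1}

def deliberate(state, actions):
    # Run each candidate action on the representation; discard the
    # lethal hypotheses, then pick the best survivor.
    survivors = [a for a in actions if simulate(state, a)["alive"]]
    return max(survivors,
               key=lambda a: simulate(state, a).get("progress", 0))

choice = deliberate({"progress": 0}, ["jump_chasm", "walk_around"])
# the lethal option was eliminated in simulation, not by dying
```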

duplicate and differentiate???

difficult to tell whether there's deliberative reasoning – anything from the higher levels can be produced in a complex version of the lower levels if evolution has had a chance to incorporate it genetically

Layered architectures have many variants

The "Omega" model of information flow

see Shallice, Norman, Cooper

pipeline of information flow going through higher levels and back out through pinholes (maybe with a "will" at the top of the omega)

in contrast with all the multiple interactions of Sloman's model

 

[Diagram: the CogAff grid – columns: perception, central processing, action; rows (top to bottom): meta-management, deliberative reasoning, reactive processing]

 

Another variant � subsumption architectures

adds extra interactions between/within levels

Brooks denies that animals use deliberative mechanisms

(How does he get to overseas conferences?)???

Multiple sources of control with changing dominance relationships

As processing grows more sophisticated, it can become slower

to the point of danger

fast, powerful "alarm systems" are needed – stupid, pattern-matching, sometimes trainable

general global alarms, local alarms, very specialised alarms (e.g. protective blinking reflex)
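The alarm idea can be sketched as a cheap pattern matcher that preempts slow deliberative processing (the patterns and function names below are invented for illustration):

```python
# Sketch of an "alarm system" (illustrative): a fast, stupid,
# pattern-matching check that can override slow sophisticated
# processing when a dangerous input appears.

ALARM_PATTERNS = {"looming_object": "blink", "sudden_heat": "withdraw"}

def slow_deliberation(percept):
    # Stands in for sophisticated but slow deliberative processing.
    return "considered response to " + percept

def respond(percept):
    # The alarm is consulted first: cheap lookup, global override.
    if percept in ALARM_PATTERNS:
        return ALARM_PATTERNS[percept]
    return slow_deliberation(percept)

# a looming object triggers the protective reflex immediately;
# anything else falls through to deliberation
```

A trainable alarm would simply allow new entries in the pattern table; a very specialised alarm (like the blink reflex) is a single hard-wired entry.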

Extra mechanisms needed

The need for "inner languages"

CogAff is a schema

not all components are present in all animals

Conclusions: for Science and engineering

consider an "eco-system of mind" rather than just a "society of mind"

Q&A

can we talk about "pain" for cats and humans in the same way?

yes, to an extent

where does language fit in with the tripartite model?

it adds to the three that may already exist, adding (descriptive) power to the deliberative reasoning and more steps, for instance.

language is used primarily to think with, and communication is better understood as shared thinking.

how does this fit in with Wittgenstein?

 

 

Questions

what is co-evolution?

how do the r-trajectories arise in a natural system?

the key thing is robustness??? (why) are modular designs more robust?

how do you fit functionalism into the mind-body problem?

are there any reptiles (or birds) that can do deliberative reasoning?